Get single point surface datasets from subset_data rather than mksurfdata #1812
Conversation
Add outputs for annual crop sowing and harvest dates

Added annual outputs of sowing and harvest dates (`SDATES` and `HDATES`, respectively). This should simplify the determination of sowing and harvest dates in postprocessing.

- Sowing dates are on new dimension `mxgrowseas` (maximum number of growing seasons allowed to begin in a year; currently hard-coded to 1).
- Harvest dates are on new dimension `mxharvests` (maximum number of harvests allowed in a year), which is `mxgrowseas` + 1. This is needed because in year Y you might harvest a field that was planted in year Y-1, then plant and harvest again.
- The lengths of these dimensions are public constants of `clm_varpar`.

Additionally, removed `cropplant` as discussed in ESCOMP#1500 (comment). The `mxharvests` concept enables the addition of more such outputs to further simplify crop postprocessing: for example, yield per growing season as a direct output rather than needing to cross-reference daily grain mass against the day of harvest.

These changes involved some rework of the crop phenology code that changes answers for crop phenology in some circumstances. All answer changes were determined to be due to issues or oddities in the old code. See discussion in ESCOMP#1616 for details (especially ESCOMP#1616 (comment)). To summarize briefly, some issues with the old code were:

- Non-winter-cereal patches that had live crops at the beginning of the year did not get planted later that year.
- There was some odd behavior for rice patches at exactly 0 deg latitude.
- Crop root depth had unexpected values outside the growing season; now root depth is set to 0 outside the growing season.
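As a rough illustration of how the new outputs might be used in postprocessing, here is a minimal sketch, assuming annual history output in a NetCDF file read with xarray. The file name and the patch layout are illustrative; only the variable names `SDATES`/`HDATES` and the dimensions `mxgrowseas`/`mxharvests` come from the description above.

```python
import xarray as xr

# Hypothetical annual history file containing the new outputs (name is illustrative).
ds = xr.open_dataset("clm_hist_ann.nc")

# SDATES: sowing day of year, on dimension mxgrowseas (currently length 1)
# HDATES: harvest day of year, on dimension mxharvests (= mxgrowseas + 1)
sdates = ds["SDATES"]
hdates = ds["HDATES"]

# With mxgrowseas hard-coded to 1, the single sowing per year is index 0.
first_sowing = sdates.isel(mxgrowseas=0)

# A year can report up to two harvests: one from a crop planted the previous
# year plus one planted (and harvested) this year.
all_harvests = hdates

print(first_sowing.values)
print(all_harvests.values)
```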
…hing else, rather than a hardcoded 100
Fix accumulation variables when changing model time step

Accumulation variables (e.g., 1-day or 10-day averages) were writing and reading their accumulation period (expressed in time steps) to the restart file. This caused incorrect behavior when changing the model time step relative to what was used to create the initial conditions file (typically a 30-minute time step). For example, if you are using a 15-minute time step with an initial conditions file that originated from a run with a 30-minute time step (at some point in its history), then an average that was supposed to be 10-day instead becomes 5-day, an average that was supposed to be 1-day becomes 12-hour, etc. (The issue is that the number of time steps in the averaging period was staying fixed rather than the actual amount of time staying fixed.)

For our out-of-the-box initial conditions files, this only impacts runs that use something other than a 30-minute time step. Typically this situation arises in configurations with an active atmospheric model running at a resolution finer than approximately 1 degree. It appears that the biggest impacts are on VOC emissions and in BGC runs; we expect the impact to be small (but still non-zero) in prescribed phenology (SP) runs that don't use VOC emissions.

This tag fixes the issue by no longer writing or reading accumulation variables' PERIOD to/from the restart file: it isn't actually needed there. See ESCOMP#1789 for more discussion, and see ESCOMP#1802 (comment) for discussion of outstanding weirdness that can still result for accumulation variables when changing the model time step. The summary of that comment is: there could be some weirdness at the start of a run, but at least for a startup or hybrid run that weirdness should work itself out within about the first averaging period. A branch or restart run could have some longer-term potential weirdness, so for now I think we should recommend that people NOT change the time step on a branch or restart run. With (significant?) additional work, we could probably avoid this additional weirdness, but my feeling is that it isn't worth the effort right now. In any case, I feel that my proposed fix brings things much closer to being correct than they currently are when changing the time step.
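To make the arithmetic of the failure mode concrete, here is a minimal sketch; the 30-minute and 15-minute time steps and the 10-day averaging period come from the description above, while the variable names are purely illustrative.

```python
# Averaging period stored on the restart file as a count of time steps,
# written by a run using a 30-minute time step.
old_dtime_s = 30 * 60                            # 30-minute time step (seconds)
period_steps = 10 * 24 * 3600 // old_dtime_s     # 10-day average -> 480 steps

# A new run restarts with a 15-minute time step but reuses the same step count.
new_dtime_s = 15 * 60
effective_period_days = period_steps * new_dtime_s / (24 * 3600)

print(effective_period_days)  # 5.0 -- the intended 10-day average becomes 5-day
```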
Minor changes to python scripts and usermod_dirs for NEON cases. Also update the lightning mesh file so that it goes with the smaller lightning file. Have NEON use new use-cases for 2018 and 2018-PD conditions for CLM. Have NEON agricultural sites run with prognostic crop. Simple fix for a warning about NaNs in import/export data from/to the coupler. Get NEON tests working on izumi; add --inputdata-dir to subset_data and modify_singlept_site_neon.py so they aren't tied to only running on cheyenne. Also update MOSART with fixes for the direct_to_outlet option. Add error checking in PartitionWoodFluxes. Fix the value of albgrd_col in SurfaceAlbedoType.F90: previously, the wrong value (albgri_col) was being set in InitHistory.

Conflicts: Externals.cfg
This set of changes allows CTSM to maintain API compatibility with changes to the FATES API. FATES has updated its nutrient dynamics routine, which required a modification to the test environment, some minor updates to variable dimensions in the history output, and a call to a new FATES history routine. Implicitly, updating the FATES tag introduces new content in the FATES model since the last API update (mostly bug fixes).
…fsurdat_in/fsurdat_out files it fails
…fig file so that it will work
… can be tested, fix some lint and black issues
…nd the other to set an existing output file
… error if the file does not exist
… echoed when the urban datasets are made
…s always used, which will make it easier to remove the use of 16pft versions which we do want to do
…se the 78pft surface datasets
… list that also has specific resolutions
…surdat_4_1x1_from_subset
All testing is good other than the FSURDATMODIFYCTSM_D_Mmpi-serial_Ld1.5x5_amazon.I2000Clm50SpRs.cheyenne_intel test. There's an additional command line option that I need to give for it. I'm also working on making a failed test more obvious to debug. I also found that if you run once, it creates the output file and then aborts, saying the file already exists. So we should add an "--overwrite" option to overwrite the file if it already exists. I could add embedded code to remove the file if it exists, but the overwrite option would make it more obvious what is happening.
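A minimal sketch of how such an "--overwrite" option might behave, assuming an argparse-based script that writes a single output file; the argument names and the placeholder file-writing step are hypothetical, not the actual tool's interface.

```python
import argparse
import os
import sys


def main():
    # Hypothetical argument names; the real tool's options may differ.
    parser = argparse.ArgumentParser(description="Write an fsurdat output file.")
    parser.add_argument("--fsurdat-out", required=True, help="path of the output file")
    parser.add_argument(
        "--overwrite",
        action="store_true",
        help="overwrite the output file if it already exists",
    )
    args = parser.parse_args()

    # Abort with a clear message instead of failing partway through the run.
    if os.path.exists(args.fsurdat_out) and not args.overwrite:
        sys.exit(
            f"ERROR: {args.fsurdat_out} already exists; rerun with --overwrite to replace it"
        )

    # Placeholder for the code that actually creates the dataset.
    with open(args.fsurdat_out, "w") as f:
        f.write("placeholder\n")


if __name__ == "__main__":
    main()
```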
…ved if the test previously failed
…istently start with read_cfg_
OK, the 39 tests that are the same as ctsm5.1.dev115 (the previous tag) are the following: Note that very few of these are BGC cases, because the Maintenance Respiration difference will change answers for most BGC cases. FATES and SP cases are more likely to be the same.
They are all short (the longest is 6 months). The single point cases aren't in the list, as expected. Most are older physics, as only the CLM51 cases will feel the zetamaxstable change. There are only four CLM51 tests that do not show a difference in answers; I'm guessing their grids are small enough and they are run short enough that the zetamaxstable difference just doesn't happen to show up.
There are 35 CLM51 tests in total, so 31 of them showed differences in answers.
Description of changes
Create the single point surface datasets using subset_data rather than mksurfdata.
Specific notes
Contributors other than yourself, if any: @negin513
CTSM Issues Fixed (include github issue #):
Fixes #1676
Fixes #1674
Fixes #1809
Fixes #1941
Fixes #1942
Fixes #1924
Are answers expected to change (and if so in what way)? Yes (for single point sites)
Any User Interface Changes (namelist or namelist defaults changes)? Yes
Testing performed, if any: just running make for single point dataset files in tools/mksurfdata_map